perf: improve performance of update metrics #1329
base: main
Conversation
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

@@             Coverage Diff              @@
##              main    #1329       +/-   ##
=============================================
- Coverage     56.12%   33.92%   -22.20%
- Complexity      976      983        +7
=============================================
  Files           119      125        +6
  Lines         11743    48515    +36772
  Branches       2251    10628     +8377
=============================================
+ Hits           6591    16460     +9869
- Misses         4012    28725    +24713
- Partials       1140     3330     +2190

View full report in Codecov by Sentry.
Although the proportion of update metrics in the CPU profile has been greatly reduced, the TPC-DS/TPC-H benchmarks on small data sets have not improved.
}
// add children
spark_plan.children().iter().for_each(|child_plan| {
    let child_node = to_native_metric_node(child_plan).unwrap();
Can we avoid the unwrap here?
Suggested change:
-let child_node = to_native_metric_node(child_plan).unwrap();
+let child_node = to_native_metric_node(child_plan)?;
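One wrinkle with the suggestion above: `?` will not compile inside the `for_each` closure as written, because the closure returns `()`. Below is a minimal, self-contained sketch of restructuring the traversal as a plain `for` loop so the error can propagate; the stub types are hypothetical stand-ins, not Comet's real `SparkPlan`, `NativeMetricNode`, or error type.

```rust
// Hypothetical stub types; Comet's actual definitions differ.
#[derive(Default)]
struct SparkPlan {
    children: Vec<SparkPlan>,
}

impl SparkPlan {
    fn children(&self) -> &[SparkPlan] {
        &self.children
    }
}

#[derive(Default)]
struct NativeMetricNode {
    children: Vec<NativeMetricNode>,
}

#[derive(Debug)]
struct CometError;

fn to_native_metric_node(spark_plan: &SparkPlan) -> Result<NativeMetricNode, CometError> {
    let mut node = NativeMetricNode::default();
    // ... populate this node's own metrics here ...

    // add children: a plain `for` loop (or try_for_each) lets `?` bubble the
    // error up to the caller instead of panicking via unwrap()
    for child_plan in spark_plan.children() {
        let child_node = to_native_metric_node(child_plan)?;
        node.children.push(child_node);
    }
    Ok(node)
}
```

An alternative with the same effect is `spark_plan.children().iter().try_for_each(...)`, whose closure returns a `Result` and short-circuits on the first error.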
@mbutrovich may be interested in reviewing this as well.
@@ -508,9 +505,6 @@ pub unsafe extern "system" fn Java_org_apache_comet_Native_executePlan(
     let next_item = exec_context.stream.as_mut().unwrap().next();
     let poll_output = exec_context.runtime.block_on(async { poll!(next_item) });
-
-    // Update metrics
-    update_metrics(&mut env, exec_context)?;
I wonder if we should add a config so that we can choose between frequent metrics updates vs just updating once the query completes. It can sometimes be helpful to see live metrics.
Per-batch is probably always overkill. For long-running jobs, is there a period that makes sense? It looks like Spark History defaults to 10s.
I do like the idea of updating metrics every N seconds.
I think checking a coarse-grained clock (e.g., CLOCK_MONOTONIC_COARSE) to see whether N seconds have elapsed before producing updated metrics would be a reasonable compromise between performance impact and metric freshness.
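A minimal sketch of that gating idea, with hypothetical names (Comet's actual execution context differs); `std::time::Instant` stands in here for CLOCK_MONOTONIC_COARSE, which would require a direct `libc::clock_gettime` call:

```rust
use std::time::{Duration, Instant};

// Hypothetical context holding the last time metrics were pushed to the JVM.
struct ExecContext {
    last_metrics_update: Instant,
    metrics_update_interval: Duration, // e.g. 10s, matching the Spark History default
}

impl ExecContext {
    // Called once per batch; returns true at most once per interval, so the
    // expensive JNI metrics push only happens every N seconds.
    fn should_update_metrics(&mut self) -> bool {
        let now = Instant::now();
        if now.duration_since(self.last_metrics_update) >= self.metrics_update_interval {
            self.last_metrics_update = now;
            true
        } else {
            false
        }
    }
}
```

On Linux, `Instant::now()` resolves to a vDSO-backed `clock_gettime(CLOCK_MONOTONIC)` call, so even without the coarse clock the per-batch check should be cheap relative to a JNI metrics push.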
Based on a single run of TPC-H @ 100GB, I see approximately a 2% improvement (325s on main vs. 318s with this PR).
Which issue does this PR close?
Closes #1328.
Rationale for this change
Improve the performance of updating metrics.
What changes are included in this PR?
How are these changes tested?
After this change, SQL metrics are displayed correctly:
[screenshot]
CPU profile:
[screenshot]